
    An integrative risk assessment approach for persistent chemicals: A case study on dioxins, furans and dioxin-like PCBs in France

    Abstract: For persistent chemicals slowly eliminated from the body, the accumulated concentration (body burden), rather than the daily exposure, is considered the proper starting point for risk assessment. This work introduces an integrative approach to persistent chemical risk assessment by means of a dynamic body burden approach. To this end, a Kinetic Dietary Exposure Model (KDEM) was extended with the long-term time trend in exposure (historic exposure) and a comparison of bioaccumulation with body burden references for toxicity. The usefulness of the model was illustrated on the dietary exposure to polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and polychlorinated biphenyls (PCBs) in France. First, the dietary exposure to these compounds was determined for 2009 and combined with its long-term time trend. To account for differences between the kinetics of PCDD/Fs and dl-PCBs, three groups of congeners were considered: PCDD/Fs, PCB 126 and the remaining dl-PCBs. The body burden was compared with reference body burdens corresponding to reproductive, hepatic and thyroid toxicity. For thyroid toxicity, this comparison indicated that in 2009 the probability of the body burden exceeding its reference ranged from 2.8% (95% CI: 1.5-4.9%) in 18-29 year olds up to 3.9% (95% CI: 2.7-7.1%) in 60-79 year olds. Notwithstanding the decreasing long-term time trend of dietary dioxin exposure in France, this probability is still expected to be 1.5% (95% CI: 0.3-2.5%) in 2030 in 60-79 year olds. For reproductive toxicity, the probability of the 2009 body burden exceeding its reference ranged from 3.1% (95% CI: 1.4-5.0%) in 18-29 year olds to 3.5% (95% CI: 2.2-5.2%) in 30-44 year olds. In 2030 this probability is negligible in 18-29 year olds, but small though significant in 30-44 year olds (0.7%, 95% CI: 0-1.6%). For hepatic toxicity, the probability in 2009 was already negligible, even in 60-79 year olds. In conclusion, this approach indicates that in France dioxin levels in food constitute a declining, though still present, future health risk with respect to thyroid and reproductive toxicity.
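The body-burden bookkeeping behind a kinetic dietary exposure model can be sketched as a one-compartment accumulation with first-order elimination. This is a minimal illustration, not the KDEM implementation used in the study; the half-life, intake level and trend below are all hypothetical.

```python
import math

def body_burden(daily_intakes_ng, half_life_years, dt_years=1.0):
    """Accumulate a body burden under first-order elimination.

    Each step: B <- B * exp(-k * dt) + intake over the step,
    where k = ln(2) / half-life.
    """
    k = math.log(2.0) / half_life_years      # elimination rate, 1/year
    burden = 0.0
    for intake_ng_per_day in daily_intakes_ng:
        burden = burden * math.exp(-k * dt_years) + intake_ng_per_day * 365.0 * dt_years
    return burden

# Hypothetical declining exposure trend: -5 %/year over 20 years
trend = [1.0 * 0.95 ** year for year in range(20)]
final_burden = body_burden(trend, half_life_years=7.0)
```

With a long half-life the burden keeps integrating past intake, which is why a declining exposure trend only slowly translates into a lower body burden.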

    The MCRA toolbox of models and data to support chemical mixture risk assessment

    A model and data toolbox is presented to assess risks from combined exposure to multiple chemicals using probabilistic methods. The Monte Carlo Risk Assessment (MCRA) toolbox, also known as the EuroMix toolbox, has more than 40 modules addressing all areas of risk assessment, and includes a data repository with data collected in the EuroMix project. This paper gives an introduction to the toolbox and illustrates its use with examples from the EuroMix project. The toolbox can be used for hazard identification, hazard characterisation, exposure assessment and risk characterisation. Examples for hazard identification are the selection of substances relevant for a specific adverse outcome based on adverse outcome pathways and QSAR models. Examples for hazard characterisation are the calculation of benchmark doses and relative potency factors with uncertainty from dose-response data, and the use of kinetic models to perform in vitro to in vivo extrapolation. Examples for exposure assessment are assessing cumulative exposure at the external or internal level, where the latter option is needed when dietary and non-dietary routes have to be aggregated. Finally, risk characterisation is illustrated by the calculation and display of the margin of exposure for single substances and for the combined exposure, including uncertainties derived from exposure and hazard characterisation estimates.
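A probabilistic margin of exposure (MOE = reference dose / exposure) can be sketched with plain Monte Carlo. This is a toy illustration of the concept only, not MCRA code; the BMDL value and the lognormal exposure distribution are invented for the example.

```python
import random

def moe_samples(n, bmdl, exposure_sampler):
    """Monte Carlo margins of exposure: MOE = BMDL / exposure."""
    return [bmdl / exposure_sampler() for _ in range(n)]

rng = random.Random(1)
# Hypothetical lognormal daily exposure; BMDL = 100 is illustrative
samples = moe_samples(10_000, bmdl=100.0,
                      exposure_sampler=lambda: rng.lognormvariate(-1.0, 0.5))
prob_moe_below_100 = sum(m < 100.0 for m in samples) / len(samples)
```

Displaying the full MOE distribution, rather than a single point estimate, is what makes the probabilistic approach informative for risk managers.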

    Modélisation des Co-Expositions aux Pesticides : une Approche Bayésienne Nonparamétrique

    This work introduces a specific application of Bayesian nonparametric methodology in the food risk analysis framework. The goal is to determine mixtures of pesticide residues that are simultaneously present in the French diet, in order to give directions for future toxicological experiments studying possible combined effects of those mixtures. Namely, the joint distribution of the exposures to a large number of pesticides is assessed from the available consumption data and contamination analyses. We propose to model the co-exposures by a Dirichlet process mixture based on a multivariate Gaussian kernel, so as to determine clusters of pesticides jointly present in the diet at high doses. The posterior distributions and the optimal partition are computed through a Gibbs sampler based on stick-breaking priors. To reduce the computational time due to the high-dimensional data, random block sampling is used. Finally, the clustering of individuals, also obtained as an auxiliary output of these analyses, is discussed from a risk management perspective.
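The stick-breaking construction underlying the Dirichlet process prior can be sketched in a few lines: successive Beta(1, α) fractions break a unit stick into mixture weights. This is only the weight-generation step of the model described above, with an arbitrary truncation level; the full co-exposure model additionally draws a Gaussian kernel per atom.

```python
import random

def stick_breaking(alpha, truncation, rng):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha)."""
    weights, remaining = [], 1.0
    for _ in range(truncation - 1):
        v = rng.betavariate(1.0, alpha)   # fraction of the remaining stick
        weights.append(remaining * v)
        remaining *= (1.0 - v)
    weights.append(remaining)             # last atom absorbs leftover mass
    return weights

w = stick_breaking(alpha=1.0, truncation=50, rng=random.Random(0))
```

Smaller α concentrates mass on few atoms (few clusters of pesticides); larger α spreads it over many, which is why α controls the expected number of co-exposure clusters.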

    Development of a hierarchical Bayesian model to estimate the growth parameters of Listeria monocytogenes in minimally processed fresh leafy salads

    The optimal growth rate ”opt of Listeria monocytogenes in minimally processed (MP) fresh leafy salads was estimated with a hierarchical Bayesian model at (mean ± standard deviation) 0.33 ± 0.16 h⁻¹. This ”opt value was much lower on average than that in nutrient broth, liquid dairy, meat and seafood products (0.7-1.3 h⁻¹), and of the same order of magnitude as in cheese. The cardinal temperatures Tmin, Topt and Tmax were determined at −4.5 ± 1.3 °C, 37.1 ± 1.3 °C and 45.4 ± 1.2 °C, respectively. These parameters were estimated from 206 growth curves of L. monocytogenes in MP fresh leafy salads (lettuce including iceberg lettuce, broad leaf endive, curly leaf endive, lamb's lettuce, and mixtures of them) selected from the scientific literature and technical reports. The adequacy of the model was evaluated by comparing observed data (bacterial concentrations at each experimental time for all 206 growth curves, mean log10 increase at selected times and temperatures, and L. monocytogenes concentrations in naturally contaminated MP iceberg lettuce) with the distribution of the predicted data generated by the model. The sensitivity of the model to assumptions about the prior values was also tested. The observed values mostly fell within the 95% credible interval of the distribution of predicted values. The ”opt and its uncertainty determined in this work could be used in quantitative microbial risk assessment for L. monocytogenes in minimally processed fresh leafy salads.
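Cardinal temperatures and ”opt are typically combined through a secondary growth model such as Rosso's cardinal temperature model with inflection (CTMI); whether the paper used exactly this form is an assumption, but the posterior-mean parameters quoted above plug in directly:

```python
def gamma_ctmi(t, t_min, t_opt, t_max):
    """Rosso cardinal temperature model with inflection: gamma in [0, 1]."""
    if t <= t_min or t >= t_max:
        return 0.0
    num = (t - t_max) * (t - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * t))
    return num / den

def growth_rate(t_celsius, mu_opt=0.33, t_min=-4.5, t_opt=37.1, t_max=45.4):
    """mu(T) = mu_opt * gamma(T), with posterior means from the abstract."""
    return mu_opt * gamma_ctmi(t_celsius, t_min, t_opt, t_max)
```

At refrigeration temperatures (e.g. 8 °C) the predicted rate is a small fraction of ”opt, which is the quantity of interest for risk assessment of chilled MP salads.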

    Using Empirical Likelihood to Combine Data: Application to Food Risk Assessment

    This article introduces an original methodology based on empirical likelihood, which aims at combining different food contamination and consumption surveys to provide risk managers with a risk measure that takes into account all the available information. This risk index is defined as the probability that exposure to a contaminant exceeds a safe dose. It is naturally expressed as a nonlinear functional of the different consumption and contamination distributions, more precisely as a generalized U-statistic. This nonlinearity and the huge size of the data sets make direct computation infeasible. Using linearization techniques and incomplete versions of the U-statistic, a tractable "approximated" empirical likelihood program is solved, yielding asymptotic confidence intervals for the risk index. An alternative "Euclidean likelihood program" is also considered, replacing the Kullback-Leibler distance involved in the empirical likelihood with the Euclidean distance. Both methodologies are tested on simulated data and applied to assess the risk due to the presence of methyl mercury in fish and other seafood.
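The idea of an incomplete U-statistic can be illustrated on a toy version of the risk index: instead of averaging the exceedance kernel over all consumption-contamination pairs, average it over randomly drawn pairs. This is a conceptual sketch only; the paper's estimator also handles survey weights, multiple surveys and the empirical likelihood program.

```python
import random

def incomplete_u_risk(consumptions, contaminations, safe_dose, n_pairs, rng):
    """Incomplete U-statistic estimate of P(consumption * contamination > d).

    Averages the indicator kernel over n_pairs random pairs rather than
    over all n * m pairs, trading a little variance for tractability.
    """
    hits = 0
    for _ in range(n_pairs):
        c = rng.choice(consumptions)     # e.g. g of food per kg body weight
        q = rng.choice(contaminations)   # e.g. ug contaminant per g of food
        hits += (c * q > safe_dose)
    return hits / n_pairs
```

The number of sampled pairs controls the trade-off between Monte Carlo error and the cost of evaluating the full double sum over huge survey data sets.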

    Peanut traces in food: A probabilistic risk assessment based on the French MIRABEL survey

    The risk of reactions due to the unintentional presence of allergens, such as traces in packaged products, remains difficult to characterize. The aim was to assess the risk posed by unintended traces of peanut in packaged food products to peanut-allergic patients, using original data from the MIRABEL survey. We developed an integrated Bayesian probabilistic risk model based on relevant data, including consumption of a panel of selected products with and without precautionary labelling by peanut-allergic patients, and their individual threshold dose at oral food challenge (OFC). 785 patients (<16 years: 86%) were included in the survey. Data on OFC and food consumption were available for 238 and 443 patients, respectively. For eight food categories, with precautionary labelling (30%) or without (70%), the risk was nil (no peanut traces). For chocolate tablets and spreads, the risk was not significantly different from zero. For appetizers, across the different models and including uncertainty intervals, the mean estimated risk was between 38 reactions per 1,000,000 eating occasions and 55 reactions per 10,000 eating occasions. For the 1% lowest-dose reactors at OFC, the estimated risk was between 8 reactions per 10,000 and 71 reactions per 1,000 eating occasions. According to these results, the allergic risk related to peanut traces in packaged food products was only significant for the most sensitive allergic consumers of appetizers. If the link between food consumption and threshold dose is not taken into account, individual variability could be overlooked and the risk underestimated. These findings need to be confirmed by larger and representative studies including non-packaged products.
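The core quantity here is the probability that the peanut dose ingested at an eating occasion reaches an individual's threshold dose. A minimal Monte Carlo sketch of the independent-sampling variant is below; note the abstract's caveat that treating consumption and threshold as independent (as this sketch does) can underestimate the risk, and all inputs here are illustrative.

```python
import random

def reaction_probability(occasion_doses_mg, thresholds_mg, n_draws, rng):
    """Estimate P(dose at an eating occasion >= personal threshold dose)
    by drawing doses and thresholds independently."""
    hits = sum(rng.choice(occasion_doses_mg) >= rng.choice(thresholds_mg)
               for _ in range(n_draws))
    return hits / n_draws
```

A per-individual model would instead pair each patient's own consumption pattern with their own OFC threshold, preserving the dependence the authors highlight.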

    Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables

    A normal distribution and a mixture model of two normal distributions, in a Bayesian approach using prevalence and concentration data, were used to establish the distribution of contamination of the food-borne pathogenic bacterium Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were obtained by assuming one contaminated sample in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2 and 3 log viable L. monocytogenes organisms/g were 1.44%, 0.63% and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports from 2000 to 2005 than in those from 1988 to 1999, and a lower estimated contamination of leafy salads than of sprouts and other vegetables. The value of the mixture model for the estimation of microbial contamination is discussed.
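Once a normal distribution has been fitted to the log10 concentrations, exceedance probabilities follow from the normal tail. The sketch below uses the posterior means quoted above for the single-normal fit; it will not reproduce the mixture-model percentages, which additionally weight a contaminated and an uncontaminated component.

```python
import math

def exceedance_probability(mean_log10, sd_log10, threshold_log10):
    """P(log10 concentration > threshold) under a fitted normal distribution."""
    z = (threshold_log10 - mean_log10) / sd_log10
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # upper-tail normal probability

# Single-normal posterior means from the abstract: -2.63 +/- 1.48 log CFU/g
p_above_1_log = exceedance_probability(-2.63, 1.48, 1.0)
```

Such tail probabilities feed directly into exposure modules of quantitative microbial risk assessments, where only the rare high-concentration servings drive the risk.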